Adversarial ML
SoK: Realistic Adversarial Attacks and Defenses for Intelligent Network Intrusion Detection
Vitorino, João, Praça, Isabel, Maia, Eva
Machine Learning (ML) can be incredibly valuable to automate anomaly detection and cyber-attack classification, improving the way that Network Intrusion Detection (NID) is performed. However, despite the benefits of ML models, they are highly susceptible to adversarial cyber-attack examples specifically crafted to exploit them. A wide range of adversarial attacks have been created and researchers have worked on various defense strategies to safeguard ML models, but most were not intended for the specific constraints of a communication network and its communication protocols, so they may lead to unrealistic examples in the NID domain. This Systematization of Knowledge (SoK) consolidates and summarizes the state-of-the-art adversarial learning approaches that can generate realistic examples and could be used in real ML development and deployment scenarios with real network traffic flows. This SoK also describes the open challenges regarding the use of adversarial ML in the NID domain, defines the fundamental properties that are required for an adversarial example to be realistic, and provides guidelines for researchers to ensure that their future experiments are adequate for a real communication network.
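The "realistic example" requirement the abstract describes can be made concrete with a small projection step: after perturbing a tabular flow record, clamp it back onto the domain so counts stay valid and protocol fields stay legal. The sketch below is an illustrative assumption of what such a constraint set might look like (the feature names `packets`, `bytes`, `duration`, `proto` and the 40-byte minimum header are hypothetical), not the SoK's actual method:

```python
# Illustrative sketch: project a perturbed network-flow record back onto
# simple domain constraints so the adversarial example stays realistic.
# Feature names and the 40-byte minimum header size are assumptions.

VALID_PROTOS = {"tcp", "udp", "icmp"}
MIN_HEADER_BYTES = 40  # assumed minimal per-packet overhead


def project_to_domain(perturbed: dict, original: dict) -> dict:
    """Clamp a perturbed flow record onto basic NID domain constraints."""
    out = dict(perturbed)
    # Packet and byte counts must be non-negative integers.
    out["packets"] = max(0, round(out["packets"]))
    out["bytes"] = max(0, round(out["bytes"]))
    # Each packet carries at least a minimal header's worth of bytes.
    out["bytes"] = max(out["bytes"], out["packets"] * MIN_HEADER_BYTES)
    # Flow duration cannot be negative.
    out["duration"] = max(0.0, out["duration"])
    # Categorical fields cannot take invented values; revert illegal edits.
    if out["proto"] not in VALID_PROTOS:
        out["proto"] = original["proto"]
    return out
```

A real constraint set would also encode inter-feature dependencies specific to each protocol, which is precisely the gap the SoK argues most generic attacks ignore.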
Reinventing adversarial machine learning: adversarial ML from scratch
I think this might be a half-decent motivation! I want to explain why I think adversarial ML is so interesting. To give it context, let's start with a ludicrous party question: is a Pop-Tart a ravioli? Let's unpack why the question makes for a fun debate among friends. The question "is Chef Boyardee ravioli?" makes for less entertaining banter because we all agree (minus the occasional food snobs).
Adversarial Machine Learning -- Industry Perspectives
Kumar, Ram Shankar Siva, Nyström, Magnus, Lambert, John, Marshall, Andrew, Goertzel, Mario, Comissoneru, Andi, Swann, Matt, Xia, Sharon
Based on interviews with 28 organizations, we found that industry practitioners are not equipped with tactical and strategic tools to protect, detect and respond to attacks on their Machine Learning (ML) systems. We leverage the insights from the interviews to enumerate the gaps in perspective in securing machine learning systems when viewed in the context of traditional software security development. We write this paper from the perspective of two personas: developers/ML engineers and security incident responders who are tasked with securing ML systems as they are designed, developed and deployed. The goal of this paper is to engage researchers to revise and amend the Security Development Lifecycle for industrial-grade software in the adversarial ML era.
Adversarial ML: How AI is Enabling Cyber Resilience
Machine learning enables us to correctly classify a file as either benign or malicious over 99% of the time. But the question then becomes, how can this classifier be attacked? Is it possible to alter the file in such a way as to trick the classifier? We often make the mistake of assuming the model is judging as we judge, i.e., we assume the machine learning model has baked into it a conceptual understanding of the objects being classified. For example, let's look at lie detectors.
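A toy illustration of the attack this excerpt poses: against a linear scorer, an attacker only needs to shift features they can change without affecting the file's behavior, such as content appended to an overlay. The weights and feature names below are invented for illustration and bear no relation to any real malware classifier:

```python
# Toy evasion sketch against a fixed linear "malware score" model.
# Weights, bias, and features are made up for illustration only.

WEIGHTS = {"entropy": 2.0, "has_packer_sig": 3.0, "benign_strings": -1.5}
BIAS = -1.0


def score(features: dict) -> float:
    """Linear malware score: positive means flagged as malicious."""
    return BIAS + sum(WEIGHTS[k] * v for k, v in features.items())


def is_malicious(features: dict) -> bool:
    return score(features) > 0


# Original sample: high entropy, known packer signature -> flagged.
sample = {"entropy": 0.9, "has_packer_sig": 1.0, "benign_strings": 0.0}

# Attacker move: append benign-looking strings to the file. Execution is
# unchanged, but the feature vector shifts across the decision boundary.
evasive = dict(sample, benign_strings=3.0)
```

The model never "understands" what a packed executable is; it only weighs correlated features, which is exactly the gap between machine judgment and human judgment the excerpt describes.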
Applying ML to InfoSec: Adversarial ML
Some of these sources (like log formats) are readily available and fairly standardized, while others will require extensive tooling and software modifications. Bearing in mind that the whole point of machine learning is generalization beyond the training set, thoughtful feature engineering is required to go from the identity information of IP addresses, hostnames and URLs to something that can turn into a useful representation within the machine learning model.
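One way to sketch the feature engineering described — replacing raw identity information with representations that generalize — is to derive aggregate features from a URL rather than feeding in the hostname itself. The specific features below are illustrative assumptions, not a recommended production set:

```python
# Sketch: turn an identity-laden URL into generalizable numeric features.
# The chosen features (entropy, lengths, depth) are illustrative only.
import math
from collections import Counter
from urllib.parse import urlparse


def shannon_entropy(s: str) -> float:
    """Character-level entropy; high values often indicate generated names."""
    if not s:
        return 0.0
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())


def url_features(url: str) -> dict:
    """Extract identity-free features from a URL."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "host_entropy": shannon_entropy(host),
        "host_len": len(host),
        "num_dots": host.count("."),
        "path_depth": parsed.path.count("/"),
        "has_query": int(bool(parsed.query)),
    }
```

Because the model sees only these aggregates, it can score a hostname it has never observed, which is the generalization-beyond-the-training-set point made above.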
Formulation of Adversarial ML
Machine learning is being used in a variety of domains to restrict or prevent undesirable behaviors by hackers, fraudsters and even ordinary users. Algorithms deployed for fraud prevention, network security, and anti-money laundering belong to the broad area of adversarial machine learning, where instead of learning the patterns of a benevolent environment, the ML model is confronted with a malicious adversary looking for opportunities to exploit loopholes and weaknesses for personal gain. To evade these models, an attacker needs to arm themselves with knowledge of the algorithm, the feature space and the training data. Attackers have to obtain this information through a limited number of probing opportunities. Designing the feature space for adversarial models is highly dependent on the use case and on what limitations you wish to place on the adversary.
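The "limited number of probing opportunities" point can be sketched with a query-budgeted boundary search: against a black-box accept/reject model, an attacker can binary-search between a blocked input and an allowed one, locating the decision boundary in roughly log2(range/tolerance) queries. The thresholded fraud model below is a stand-in assumption, not any real system:

```python
# Sketch: black-box boundary probing under a query budget.
# The target model is a hypothetical stand-in with a hidden threshold.

QUERIES = 0


def target_model(amount: float) -> bool:
    """Stand-in fraud model: blocks transactions above a hidden threshold."""
    global QUERIES
    QUERIES += 1
    return amount > 731.0  # hidden decision boundary


def probe_boundary(blocked: float, allowed: float, tol: float = 1.0) -> float:
    """Binary-search the boundary between a blocked and an allowed input."""
    lo, hi = allowed, blocked
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if target_model(mid):
            hi = mid  # still blocked: boundary is below mid
        else:
            lo = mid  # allowed: boundary is above mid
    return lo  # highest probed value that still passes


best = probe_boundary(blocked=1000.0, allowed=0.0)
```

Ten probes suffice here to localize a boundary on a range of 1000 to within 1.0, which is why defenses often focus on rate-limiting and detecting exactly this kind of systematic probing.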